Rapidly-mixing Markov Chains
Abstract
INTRODUCTION

Andrei Andreyevich Markov was a Russian mathematician born in 1856. Markov's main interests were number theory, continued fractions, and approximation theory. He studied under P. Chebyshev and worked in the field of probability theory. Markov pioneered the study of stochastic processes by creating a model now called a Markov chain: a random walk on a state space. His work has spawned research in many fields, and many ideas now bear his name, from "hidden Markov models" to "sparse Markov transducers."

Of particular importance is the concept of a Markov chain itself. A Markov chain defines a method for a random walk on a state space Ω. That is, we have a complete directed graph in which each vertex is a possible state of nature, and the weight of an edge is the probability of moving from one state to another. The transition matrix P, where P_ij is the probability of moving from state i to state j, is the essence of the Markov chain. The key property of a Markov chain is that the next step depends only on the current state, with no regard to history. The "chain" is the random walk on the state space. For example, we may start at a particular state i. Then, for each state j, we move to state j with probability P_ij (note that this probability may be 0, and that the probability of staying put may be positive). After each step we have a new probability distribution on the state space. If we represent this distribution at time t as a row vector π(t), then π(t)P = π(t+1).

We are interested in Markov chains that are connected, meaning that from any state every other state is reachable with positive probability, and aperiodic, meaning roughly that there is no partition of the state space among whose parts the chain oscillates. If the state space is finite and the Markov chain has these two properties, we say the Markov chain is ergodic.
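The update rule π(t)P = π(t+1) can be sketched directly. The 3-state transition matrix below is a hypothetical example chosen for illustration (it is connected and has self-loops, hence ergodic); it is not taken from the text.

```python
import numpy as np

# Hypothetical 3-state transition matrix for illustration.
# Row i gives the probabilities of moving from state i to each state;
# each row sums to 1, and self-loops (staying put) are allowed.
P = np.array([
    [0.50, 0.50, 0.00],
    [0.25, 0.50, 0.25],
    [0.00, 0.50, 0.50],
])

def step(pi, P):
    """One step of the chain: pi(t+1) = pi(t) P (row vector times matrix)."""
    return pi @ P

# Start in state 0 with probability 1 and evolve the distribution.
pi = np.array([1.0, 0.0, 0.0])
for t in range(50):
    pi = step(pi, P)
print(pi)  # after many steps, close to the stationary distribution
```

Running the loop shows the distribution settling down regardless of the starting state, which is exactly the convergence behavior the next paragraph describes.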
The Fundamental Theorem of Markov Chains says that if the Markov chain is ergodic, the probability distribution over the state space converges to a unique distribution π. Note that πP = π, and for this reason we call π the stationary distribution. One other …
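Since πP = π says that π is a left eigenvector of P with eigenvalue 1, the stationary distribution of a small chain can be computed by eigendecomposition. A minimal sketch, using the same hypothetical 3-state matrix as above (any ergodic P would do):

```python
import numpy as np

# Hypothetical ergodic transition matrix (illustrative, not from the text).
P = np.array([
    [0.50, 0.50, 0.00],
    [0.25, 0.50, 0.25],
    [0.00, 0.50, 0.50],
])

# pi P = pi  <=>  P^T pi^T = pi^T, so pi is the eigenvector of P^T
# associated with the eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
i = int(np.argmin(np.abs(vals - 1.0)))  # index of the eigenvalue closest to 1
pi = np.real(vecs[:, i])
pi = pi / pi.sum()                      # normalize to a probability distribution

assert np.allclose(pi @ P, pi)          # pi is indeed stationary
print(pi)
```

For this particular matrix the computation yields π = (0.25, 0.5, 0.25), matching what the iteration in the previous sketch converges to.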
Related articles
Conductance and Rapidly Mixing Markov Chains
Conductance is a measure of a Markov chain that quantifies its tendency to circulate around its states. A Markov chain with low conductance will tend to get ‘stuck’ in a subset of its states whereas one with high conductance will jump around its state space more freely. The mixing time of a Markov chain is the number of steps required for the chain to approach its stationary distribution. There...
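The conductance described above can be computed by brute force for a tiny chain: it is the minimum, over all subsets S with π(S) ≤ 1/2, of the probability flow out of S divided by π(S). A minimal sketch under that definition, using a hypothetical 3-state chain for illustration:

```python
import itertools

# Hypothetical 3-state chain (illustrative): transition matrix P and its
# stationary distribution pi.
P = [
    [0.50, 0.50, 0.00],
    [0.25, 0.50, 0.25],
    [0.00, 0.50, 0.50],
]
pi = [0.25, 0.5, 0.25]

def conductance(P, pi):
    """Brute-force conductance:
    Phi = min over nonempty S with pi(S) <= 1/2 of
          (sum over i in S, j not in S of pi_i * P_ij) / pi(S).
    Exponential in the number of states, so only usable for tiny chains.
    """
    n = len(pi)
    best = float("inf")
    for r in range(1, n):
        for S in itertools.combinations(range(n), r):
            S = set(S)
            piS = sum(pi[i] for i in S)
            if piS > 0.5:
                continue
            flow = sum(pi[i] * P[i][j] for i in S for j in range(n) if j not in S)
            best = min(best, flow / piS)
    return best

print(conductance(P, pi))
```

Low conductance means some subset S traps the walk (little flow leaves it), which is precisely the 'stuck' behavior the abstract mentions.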
On Systematic Scan
In this thesis we study the mixing time of systematic scan Markov chains on finite spin systems. A systematic scan Markov chain is a Markov chain which updates the sites in a deterministic order and this type of Markov chain is often seen as intuitively appealing in terms of implementation to scientists conducting experimental work. Until recently systematic scan Markov chains have largely resi...
Very rapidly mixing Markov Chains for 2Δ-colourings and for independent sets in a graph with maximum degree 4
On Markov Chains for Independent Sets
Random independent sets in graphs arise, for example, in statistical physics, in the hard-core model of a gas. In 1997, Luby and Vigoda described a rapidly mixing Markov chain for independent sets, which we refer to as the Luby–Vigoda chain. A new rapidly mixing Markov chain for independent sets is defined in this paper. Using path coupling, we obtain a polynomial upper bound for the mixing tim...
Rapid Mixing of Several Markov Chains for a Hard-Core Model
The mixing properties of several Markov chains to sample from configurations of a hard-core model have been examined. The model is familiar in the statistical physics of the liquid state and consists of a set of n nonoverlapping balls of radius r∗ in a d-dimensional hypercube. Starting from an initial configuration, standard Markov chain Monte Carlo methods may be employed to generate ...
Rapidly Mixing Markov Chains: A Comparison of Techniques
For many fundamental sampling problems, the best, and often the only known, approach to solving them is to take a long enough random walk on a certain Markov chain and then return the current state of the chain. Techniques to prove how long “long enough” is, i.e., the number of steps in the chain one needs to take in order to be sufficiently close to the stationary distribution of the chain, ar...
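The quantity "how long is long enough" can be made concrete for a tiny chain: define the mixing time as the smallest t such that, from every starting state, the distribution P^t(i, ·) is within total variation distance ε of π. A brute-force sketch under that definition, on a hypothetical 3-state chain:

```python
import numpy as np

# Hypothetical 3-state chain (illustrative) and its stationary distribution.
P = np.array([
    [0.50, 0.50, 0.00],
    [0.25, 0.50, 0.25],
    [0.00, 0.50, 0.50],
])
pi = np.array([0.25, 0.5, 0.25])

def mixing_time(P, pi, eps=0.25, t_max=10_000):
    """Smallest t with max_i TV(P^t(i, .), pi) <= eps, by direct iteration.
    Total variation distance is half the L1 distance between distributions.
    Feasible only when the whole transition matrix fits in memory.
    """
    Pt = np.eye(len(pi))
    for t in range(1, t_max + 1):
        Pt = Pt @ P                      # Pt now holds P^t
        tv = 0.5 * np.max(np.abs(Pt - pi).sum(axis=1))
        if tv <= eps:
            return t
    return None  # did not mix within t_max steps

print(mixing_time(P, pi))
```

For interesting chains the state space is exponentially large, so this direct computation is impossible; that is exactly why the proof techniques the abstract surveys (coupling, conductance, and so on) are needed to bound the mixing time indirectly.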